Genetic Programming for Multibiometrics
Biometric systems suffer from some drawbacks: a biometric system generally
provides good performance, except with some individuals, as its performance
depends highly on the quality of the capture. One solution to some of these
problems is multibiometrics, where different biometric systems are combined
(multiple captures of the same biometric modality, multiple feature extraction
algorithms, multiple biometric modalities, etc.). In this paper, we are
interested in score-level fusion functions (i.e., we use a multibiometric
authentication scheme which accepts or denies the claimant's access to an
application). In the state of the art, the weighted sum of the scores provided
by different biometric systems (a linear classifier) and an SVM (a non-linear
classifier) are among the best-performing approaches. We present a new method
based on genetic programming that gives similar or better performance
(depending on the complexity of the database). We derive a score fusion
function by assembling classical primitive functions (+, *, -, ...). We have
validated the proposed method on three significant biometric benchmark
datasets from the state of the art.
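The fusion idea above can be sketched as follows. This is a minimal Python illustration, not the paper's implementation: a candidate fusion function is an expression tree over the primitives +, * and -, compared to the weighted-sum baseline. The tree here is hand-written rather than evolved by genetic programming, and all names are illustrative.

```python
def weighted_sum(scores, weights):
    """Baseline linear fusion: weighted sum of matcher scores."""
    return sum(w * s for w, s in zip(weights, scores))

def evaluate(tree, scores):
    """Evaluate a fusion expression tree.

    A tree is either an int (leaf: index of a matcher score) or a
    tuple ('op', left, right) over the primitives +, * and -.
    """
    if isinstance(tree, int):                 # leaf: raw score of matcher i
        return scores[tree]
    op, left, right = tree
    a, b = evaluate(left, scores), evaluate(right, scores)
    if op == '+':
        return a + b
    if op == '*':
        return a * b
    if op == '-':
        return a - b
    raise ValueError(f"unknown primitive: {op}")

scores = [0.8, 0.6, 0.9]                      # scores from three matchers
linear = weighted_sum(scores, [0.5, 0.2, 0.3])
gp_tree = ('+', ('*', 0, 1), 2)               # candidate individual: s0*s1 + s2
fused = evaluate(gp_tree, scores)
```

In a real genetic programming run, many such trees would be generated, scored on a labeled dataset, and recombined; the sketch only shows how one individual is represented and applied.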
Performance Evaluation of Biometric Template Update
Template update allows the biometric reference of a user to be modified while
he uses the biometric system. With such a mechanism, we expect the biometric
system to always use an up-to-date representation of the user, by capturing
his intra-class (temporary or permanent) variability. Although several studies
exist in the literature, there is no commonly adopted evaluation scheme, which
makes it difficult to compare the different systems of the literature. In this
paper, we show that using different evaluation procedures can lead to
different, and contradictory, interpretations of the results. We use a
keystroke dynamics template update system (a modality whose templates age
quickly) on a dataset consisting of eight different sessions to illustrate
this point. Even if we do not fully solve this problem, we show that it is
necessary to standardize template update evaluation procedures.
International Biometric Performance Testing Conference 2012, Gaithersburg, MD,
USA (2012)
Hybrid Template Update System for Unimodal Biometric Systems
Semi-supervised template update systems automatically take into account the
intra-class variability of the biometric data over time. Such systems can be
inefficient by including too many impostor samples or skipping too many
genuine samples. In the first case, the biometric reference drifts from the
real biometric data and attracts impostors more often. In the second case, the
biometric reference does not evolve quickly enough and also progressively
drifts from the real biometric data. We propose a hybrid system using several
biometric sub-references in order to increase the performance of self-update
systems by reducing the previously cited errors. The proposition is validated
for a keystroke-dynamics authentication system (a modality which suffers from
high variability over time) on two substantial datasets from the state of the
art.
IEEE International Conference on Biometrics: Theory, Applications and Systems
(BTAS 2012), Washington, DC, USA (2012)
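A minimal sketch of the several-sub-references idea, under the assumptions of Euclidean-distance matching and an exponential-moving-average update (the paper's exact matching and update rules may differ); the class name and parameters are illustrative.

```python
import math

class HybridSelfUpdate:
    """Toy self-update scheme with several sub-references per user.

    A query sample is matched against the closest sub-reference; only
    when it is accepted as genuine does that sub-reference drift toward
    the sample, which limits both impostor inclusion and stagnation.
    """

    def __init__(self, references, threshold):
        self.refs = [list(r) for r in references]   # K sub-references
        self.threshold = threshold

    def _dist(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def verify_and_update(self, sample, alpha=0.1):
        # Find the nearest sub-reference and its distance.
        k, d = min(enumerate(self._dist(r, sample) for r in self.refs),
                   key=lambda t: t[1])
        accepted = d <= self.threshold
        if accepted:
            # Exponential moving average: drift the winner toward the sample.
            self.refs[k] = [(1 - alpha) * r + alpha * s
                            for r, s in zip(self.refs[k], sample)]
        return accepted
```

A rejected sample leaves every sub-reference untouched, so a single impostor acceptance can only pollute one of the K sub-references rather than the whole biometric reference.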
Fast computation of the performance evaluation of biometric systems: application to multibiometrics
The performance evaluation of biometric systems is a crucial step when
designing and evaluating such systems. The evaluation process uses the Equal
Error Rate (EER) metric proposed by the International Organization for
Standardization (ISO/IEC). The EER is a powerful metric which allows biometric
systems to be compared and evaluated easily. However, the computation of the
EER is, most of the time, very intensive. In this paper, we propose a fast
method which computes an approximated value of the EER. We illustrate the
benefit of the proposed method on two applications: the computation of
non-parametric confidence intervals and the use of genetic algorithms to
compute the parameters of fusion functions. Experimental results show the
superiority of the proposed EER approximation method in terms of computing
time, and the interest of its use to speed up the learning of parameters with
genetic algorithms. The proposed method opens new perspectives for the
development of secure multibiometric systems by speeding up their computation
time.
Future Generation Computer Systems (2012)
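For context, the brute-force EER computation that such a fast method competes with can be sketched as below (a naive baseline, not the paper's approximation): every observed score is tried as a decision threshold, and the operating point where the false accept rate (FAR) and false reject rate (FRR) are closest is returned.

```python
def eer(genuine, impostor):
    """Approximate the Equal Error Rate by exhaustive threshold sweep.

    genuine/impostor are lists of matching scores (higher = more similar);
    a claimant is accepted when score >= threshold.
    """
    best_gap, best_eer = 2.0, None
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)     # false rejects
        far = sum(i >= t for i in impostor) / len(impostor)  # false accepts
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer
```

The sweep is linear in the number of distinct scores after an O(n log n) sort, and it must be repeated from scratch for every candidate fusion parameter set, which is why this computation dominates, for example, a genetic-algorithm search over fusion weights.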
Web-Based Benchmark for Keystroke Dynamics Biometric Systems: A Statistical Analysis
Most keystroke dynamics studies have been evaluated using a specific kind of
dataset in which users type an imposed login and password. Moreover, these
studies are optimistic, since most of them use different acquisition
protocols, private datasets, controlled environments, etc. In order to better
assess keystroke dynamics performance, the main contribution of this paper is
twofold. First, we provide a new kind of dataset in which users have typed
both an imposed and a chosen pair of logins and passwords. In addition, the
keystroke dynamics samples are collected in a web-based, uncontrolled
environment (OS, keyboard, browser, etc.). Such a dataset is important since
it provides more realistic results on keystroke dynamics performance than the
literature (controlled environments, etc.). Second, we present a statistical
analysis of well-known assertions, such as the relationship between
performance and password size, the impact of fusion schemes on overall system
performance, and others such as the relationship between performance and
entropy. We highlight some new results on keystroke dynamics under realistic
conditions.
The Eighth International Conference on Intelligent Information Hiding and
Multimedia Signal Processing (IIHMSP 2012), Piraeus, Greece (2012)
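The raw material of such keystroke dynamics datasets is typically a per-key sequence of press/release timestamps. A minimal sketch of turning them into the two classic features, hold times and flight times (the field layout here is illustrative, not the paper's dataset format):

```python
def keystroke_features(events):
    """Extract classic keystroke-dynamics features.

    events: list of (key, press_ms, release_ms) tuples in typing order.
    Returns (holds, flights) where
      hold   = release - press of each key,
      flight = next key's press - current key's release.
    """
    holds = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2]
               for i in range(len(events) - 1)]
    return holds, flights

# Hypothetical timings for typing "sec":
events = [('s', 0, 90), ('e', 140, 230), ('c', 300, 380)]
holds, flights = keystroke_features(events)
# holds   -> [90, 90, 80]
# flights -> [50, 70]
```

In an uncontrolled web environment, these timings come from browser events on heterogeneous hardware, which is one reason the resulting performance figures are less optimistic than those of lab datasets.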
Image Watermarking With Biometric Data For Copyright Protection
In this paper, we deal with the proof of ownership or legitimate usage of a
digital content, such as an image, in order to tackle illegitimate copying.
The proposed scheme, based on the combination of watermarking and cancelable
biometrics, does not require a trusted third party; all the exchanges are
between the provider and the customer. The use of cancelable biometrics
provides a privacy-compliant proof of identity. We illustrate the robustness
of this method against intentional and unintentional attacks on the
watermarked content.
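Cancelable biometrics replaces the raw template with a revocable, non-invertible transform. A toy sketch in the spirit of BioHashing-style random projections follows; this is an assumption for illustration, the paper's actual scheme may differ, and all names are hypothetical.

```python
import random

def cancelable_template(features, user_seed, n_bits=16):
    """Toy cancelable transform: seeded random projection + binarisation.

    The feature vector is projected onto n_bits random directions derived
    from a user-specific seed, and each projection is binarised by sign.
    Revoking a compromised template simply means issuing a new seed; the
    binarisation makes recovering the original features impractical.
    """
    rng = random.Random(user_seed)
    bits = []
    for _ in range(n_bits):
        direction = [rng.uniform(-1.0, 1.0) for _ in features]
        projection = sum(f * d for f, d in zip(features, direction))
        bits.append(1 if projection >= 0 else 0)
    return bits

template = cancelable_template([0.2, 0.7, 0.1], user_seed=42)
```

The same seed always reproduces the same template for matching, while a new seed yields an independent template from the same biometric data, which is what makes the proof of identity both verifiable and privacy-compliant.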
My Behavior is my Privacy & Secure Password!
Many studies propose strong user authentication based on biometric
modalities. However, they often either assume a trusted component, are
modality-dependent, use only one biometric modality, are reversible, or do
not enable the service to adapt its security on the fly. A recent work [1]
introduced the concept of Personal Identity Code Respecting Privacy (PICRP),
a non-cryptographic and non-reversible signature computed from any arbitrary
information. In this paper, we extend this concept with the use of keystroke
dynamics, IP and GPS geolocation, by optimizing the pre-processing and
merging of the collected information. We demonstrate the performance of the
proposed approach through experimental results, and we present an example of
its usage.
Evaluation of Biometric Systems
Biometrics is considered a promising alternative to traditional methods based
on "what we own" (such as a key) or "what we know" (such as a password). It is
based on "what we are" and "how we behave". Few people know that biometrics
have been used for ages for identification or signature purposes. In 1928,
for example, fingerprints were used for women clerical employees of the Los
Angeles police department, as depicted in Figure 1. Fingerprints were also
already used as a signature for commercial exchanges in Babylon (around 3000
BC). Alphonse Bertillon proposed in 1879 to use anthropometric information
for police investigation. Nowadays, all police forces in the world use this
kind of information to solve crimes. The first prototypes of terminals
providing automatic processing of voice and digital fingerprints were defined
in the mid-1970s. Nowadays, biometric authentication systems have many
applications [1]: border control, e-commerce, etc. The main benefits of this
technology are to provide better security and to facilitate the
authentication process for the user. Also, it is usually more difficult to
copy the biometric characteristics of an individual than most other
authentication factors, such as passwords.
Despite the obvious advantages of biometric systems, they have not spread as
much as expected. The main drawback is the uncertainty of the verification
result. By contrast to password checking, the verification of biometric raw
data is subject to errors and is represented by a similarity percentage (100%
is never reached). Other drawbacks related to vulnerabilities and usability
issues also exist. In addition, in order to be used in an industrial context,
the quality of a biometric system must be precisely quantified. We need a
reliable evaluation methodology in order to demonstrate the benefit of a new
biometric system. Moreover, many questions remain: Should we be confident in
this technology? What kind of biometric modalities can be used? What are the
trends in this domain? The objective of this chapter is to answer these
questions by presenting an evaluation methodology for biometric systems.
Identifying individuals from average quality fingerprint reference templates, when the best do not provide the best results!
The fingerprint is one of the most used biometric modalities because of its
persistence, its uniqueness, and its ease of acquisition. Nowadays, there are
large country-sized fingerprint databases for identification purposes, for
border access control and also for Visa issuance procedures around the world.
The objective usually is to identify an input fingerprint within a large
fingerprint database. In order to achieve this goal, different fingerprint
pre-selection, classification or indexing techniques have been developed to
speed up the search process and avoid comparing the input fingerprint
template against each fingerprint in the database. Although these methods are
fairly accurate for the identification process, we think that all of them
rely on the hypothesis of a good-quality fingerprint template at the first
enrollment step. In this paper, we show how the quality of reference
templates can impact the performance of identification algorithms. We collect
information and implement different methods from the state of the art of
fingerprint identification. Then, for these different methods, we vary the
quality of reference templates using the NFIQ2 quality metric. This allows us
to build a benchmark in order to evaluate the impact of these different
enrollment scenarios on identification.
Comparative Study of Fingerprint Database Indexing Methods
Nowadays, there are large country-sized fingerprint databases for
identification purposes, for border access control and also for Visa issuance
procedures around the world. Fingerprint indexing techniques aim to speed up
the search process in automatic fingerprint identification systems.
Therefore, several pre-selection, classification and indexing techniques have
been proposed in the literature. However, the proposed systems have been
evaluated with different experimental protocols, which makes it difficult to
compare their performance. The main objective of this paper is to provide a
comparative study of fingerprint indexing methods using a common experimental
protocol. Four fingerprint indexing methods, using naive, cascade, matcher
and Minutiae Cylinder Code (MCC) approaches, are evaluated on databases from
the Fingerprint Verification Competition (FVC) using the Cumulative Match
Curve (CMC) and, for the first time, also the required computing time. Our
study shows that MCC gives the best compromise between identification
accuracy and computation time.
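The CMC used for such comparisons can be computed as sketched below (a generic illustration, not the paper's evaluation code): for each rank r, it reports the fraction of probes whose true identity appears among the top-r returned candidates.

```python
def cmc(rank_lists, true_ids, max_rank):
    """Cumulative Match Curve.

    rank_lists: for each probe, its candidate identities sorted best-first.
    true_ids:   the ground-truth identity of each probe.
    Returns [hit-rate at rank 1, ..., hit-rate at rank max_rank].
    """
    n = len(true_ids)
    curve = []
    for r in range(1, max_rank + 1):
        hits = sum(truth in ranked[:r]
                   for ranked, truth in zip(rank_lists, true_ids))
        curve.append(hits / n)
    return curve

# Hypothetical example: three probes, candidate lists over identities A/B/C.
ranks = [['A', 'B', 'C'], ['B', 'A', 'C'], ['C', 'B', 'A']]
truth = ['A', 'A', 'A']
curve = cmc(ranks, truth, 3)   # -> [1/3, 2/3, 1.0]
```

The curve is non-decreasing in r by construction, and the rank-1 point is the usual identification rate; plotting it alongside per-probe computing time gives exactly the accuracy/time trade-off the study compares.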